89 research outputs found

    On the Importance of Countergradients for the Development of Retinotopy: Insights from a Generalised Gierer Model

    During the development of the topographic map from vertebrate retina to superior colliculus (SC), EphA receptors are expressed in a gradient along the nasotemporal retinal axis. Their ligands, ephrin-As, are expressed in a gradient along the rostrocaudal axis of the SC. Countergradients of ephrin-As in the retina and EphAs in the SC are also expressed. Disruption of any of these gradients leads to mapping errors. Gierer's (1981) model, which uses well-matched pairs of gradients and countergradients to establish the mapping, can account for the formation of wild type maps, but not the double maps found in EphA knock-in experiments. I show that these maps can be explained by models, such as Gierer's (1983), which have gradients and no countergradients, together with a powerful compensatory mechanism that helps to distribute connections evenly over the target region. However, this type of model cannot explain mapping errors found when the countergradients are knocked out partially. I examine the relative importance of countergradients as against compensatory mechanisms by generalising Gierer's (1983) model so that the strength of compensation is adjustable. Either matching gradients and countergradients alone or poorly matching gradients and countergradients together with a strong compensatory mechanism are sufficient to establish an ordered mapping. With a weaker compensatory mechanism, gradients without countergradients lead to a poorer map, but the addition of countergradients improves the mapping. This model produces the double maps in simulated EphA knock-in experiments and a map consistent with the Math5 knock-out phenotype. Simulations of a set of phenotypes from the literature substantiate the finding that countergradients and compensation can be traded off against each other to give similar maps. 
I conclude that a successful model of retinotopy should contain countergradients and some form of compensation mechanism, but not in the strong form put forward by Gierer.
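The gradient/countergradient matching idea can be illustrated with a toy calculation (illustrative parameters and exponential forms only, not the generalised model studied in the paper): a matched receptor gradient and countergradient give a potential that is minimised exactly at the topographically correct target position.

```python
import numpy as np

# Toy Gierer-style matching (assumed forms, for illustration): a retinal
# axon at position x carries a receptor gradient R(x) = e^{ax} and a
# countergradient R'(x) = e^{-ax}; the target expresses ligand gradients
# L(y) = e^{-ay} and L'(y) = e^{ay}. The potential
#   p(x, y) = R*L + R'*L' = e^{a(x-y)} + e^{-a(x-y)}
# is minimised where x = y, so each axon settles at its matched site.
a = 2.0
xs = np.linspace(0.0, 1.0, 21)   # retinal positions
ys = np.linspace(0.0, 1.0, 201)  # candidate target positions

def potential(x, y):
    return np.exp(a * (x - y)) + np.exp(-a * (x - y))

# Optimal termination site for each axon: the y minimising p(x, y).
best = np.array([ys[np.argmin(potential(x, ys))] for x in xs])
# With well-matched gradient/countergradient pairs the optimal sites
# reproduce the retinal positions, i.e. an ordered map.
```

This is only the matched-gradients limiting case; the paper's point is that weakening the countergradients must be traded off against a compensatory mechanism to keep the map ordered.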

    Optimal learning rules for familiarity detection

    It has been suggested that the mammalian memory system has both familiarity and recollection components. Recently, a high-capacity network to store familiarity has been proposed. Here we derive analytically the optimal learning rule for such a familiarity memory using a signal-to-noise ratio analysis. We find that in the limit of large networks the covariance rule, known to be the optimal local, linear learning rule for pattern association, is also the optimal learning rule for familiarity discrimination. The capacity is independent of the sparseness of the patterns, as long as the patterns have a fixed number of bits set. The corresponding information capacity is 0.057 bits per synapse, less than typically found for associative networks.
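A minimal sketch of covariance-rule familiarity discrimination (parameter values and the quadratic familiarity score are assumptions for illustration, not the paper's derivation): stored patterns are written into the weights as outer products of mean-subtracted activity, and familiar inputs then yield a much larger quadratic score than novel ones.

```python
import numpy as np

# Toy covariance-rule familiarity memory (illustrative sizes, not the
# paper's analysis). Patterns are binary with a fixed number of active
# units, matching the fixed-bits-set condition in the abstract.
rng = np.random.default_rng(0)
N, K, M = 200, 20, 20           # units, active bits per pattern, stored patterns
a = K / N                       # mean activity per unit

def make_pattern():
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = 1.0
    return x

stored = [make_pattern() for _ in range(M)]
# Covariance rule: accumulate outer products of mean-subtracted patterns.
W = sum(np.outer(x - a, x - a) for x in stored)
np.fill_diagonal(W, 0.0)        # no self-connections

def familiarity(x):
    # Quadratic score: large for stored patterns, near zero for novel ones.
    return (x - a) @ W @ (x - a)

novel = [make_pattern() for _ in range(M)]
mean_stored = np.mean([familiarity(x) for x in stored])
mean_novel = np.mean([familiarity(x) for x in novel])
```

Thresholding the score then gives the binary familiar/novel answer; the paper's contribution is showing this rule is optimal in the signal-to-noise sense for large networks.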

    Nonspecific synaptic plasticity improves the recognition of sparse patterns degraded by local noise

    Safaryan, K. et al. Nonspecific synaptic plasticity improves the recognition of sparse patterns degraded by local noise. Sci. Rep. 7, 46550; doi: 10.1038/srep46550 (2017). Many forms of synaptic plasticity require the local production of volatile or rapidly diffusing substances such as nitric oxide. The nonspecific plasticity these neuromodulators may induce at neighboring non-active synapses is thought to be detrimental to the specificity of memory storage. We show here that memory retrieval may benefit from this nonspecific plasticity when the applied sparse binary input patterns are degraded by local noise. Simulations of a biophysically realistic model of a cerebellar Purkinje cell in a pattern recognition task show that, in the absence of noise, leakage of plasticity to adjacent synapses degrades the recognition of sparse static patterns. However, above a local noise level of 20%, the model with nonspecific plasticity outperforms the standard, specific model. The gain in performance is greatest when the spatial distribution of noise in the input matches the range of diffusion-induced plasticity. Hence nonspecific plasticity may offer a benefit in noisy environments or when the pressure to generalize is strong.

    A Modular Network Architecture Resolving Memory Interference through Inhibition

    In real learning paradigms such as Pavlovian conditioning, several modes of learning are combined, including generalization from cues and integration of specific cases in context. Associative memories have been shown to be interesting neuronal models for quickly learning specific cases, but they are rarely used in realistic applications because their limited storage capacity leads to interference when too many examples are considered. Inspired by biological considerations, we propose a modular model of associative memory that includes mechanisms to properly handle multimodal inputs and to detect and manage interference. This paper reports experiments that demonstrate the good behavior of the model across a wide series of simulations and discusses its impact both in machine learning and in biological modeling.

    Binary Willshaw learning yields high synaptic capacity for long-term familiarity memory

    We investigate from a computational perspective the efficiency of the Willshaw synaptic update rule in the context of familiarity discrimination, a binary-answer, memory-related task that has been linked through psychophysical experiments with modified neural activity patterns in the prefrontal and perirhinal cortex regions. Our motivation for revisiting this well-known learning prescription is two-fold: first, the switch-like nature of the induced synaptic bonds, as there is evidence that biological synaptic transitions might occur in a discrete, stepwise fashion; second, the possibility that in the mammalian brain unused, silent synapses might be pruned in the long term. Besides the usual pattern and network capacities, we calculate the synaptic capacity of the model, a recently proposed measure in which only the functional subset of synapses is taken into account. We find that in terms of network capacity, Willshaw learning is strongly affected by the pattern coding rates, which have to be kept fixed and very low at all times to achieve a non-zero capacity in the large-network limit. The information carried per functional synapse, however, diverges and is comparable to that of the pattern association case, even for more realistic, moderately low activity levels that are a function of network size.
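The binary Willshaw update described above can be sketched in a few lines (sizes and the familiarity score are illustrative assumptions, not the paper's calculation): a synapse switches on, once and irreversibly, whenever both of its units are co-active in a stored pattern, and the weight matrix stays clipped to {0, 1}.

```python
import numpy as np

# Toy binary Willshaw familiarity memory (illustrative parameters).
rng = np.random.default_rng(1)
N, K, M = 500, 10, 50           # units, active bits per pattern, stored patterns

def make_pattern():
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = 1.0
    return x

stored = [make_pattern() for _ in range(M)]
W = np.zeros((N, N))
for x in stored:
    W = np.maximum(W, np.outer(x, x))   # switch-like, clipped binary update
np.fill_diagonal(W, 0.0)

def score(x):
    # Counts the "on" synapses among the pattern's active unit pairs.
    return x @ W @ x

# Every stored pattern reaches the maximal score K*(K-1); with sparse
# coding a novel pattern almost never does, which is what makes
# familiarity discrimination possible.
```

The "synaptic capacity" measure in the abstract would then count information only against the subset of W entries that are actually switched on.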

    Standard Anatomical and Visual Space for the Mouse Retina: Computational Reconstruction and Transformation of Flattened Retinae with the Retistruct Package

    The concept of topographic mapping is central to the understanding of the visual system at many levels, from the developmental to the computational. It is important to be able to relate different coordinate systems, e.g. maps of the visual field and maps of the retina. Retinal maps are frequently based on flat-mount preparations. These use dissection and relaxing cuts to render the quasi-spherical retina into a 2D preparation. The variable nature of relaxing cuts and associated tears limits quantitative cross-animal comparisons. We present an algorithm, "Retistruct," that reconstructs retinal flat-mounts by mapping them into a standard, spherical retinal space. This is achieved by: stitching the marked-up cuts of the flat-mount outline; dividing the stitched outline into a mesh whose vertices then are mapped onto a curtailed sphere; and finally moving the vertices so as to minimise a physically-inspired deformation energy function. Our validation studies indicate that the algorithm can estimate the position of a point on the intact adult retina to within 8° of arc (3.6% of nasotemporal axis). The coordinates in reconstructed retinae can be transformed to visuotopic coordinates. Retistruct is used to investigate the organisation of the adult mouse visual system. We orient the retina relative to the nictitating membrane and compare this to eye muscle insertions. To align the retinotopic and visuotopic coordinate systems in the mouse, we utilised the geometry of binocular vision. In standard retinal space, the composite decussation line for the uncrossed retinal projection is located 64° away from the retinal pole. Projecting anatomically defined uncrossed retinal projections into visual space gives binocular congruence if the optical axis of the mouse eye is oriented at 64° azimuth and 22° elevation, in concordance with previous results. 
Moreover, using these coordinates, the dorsoventral boundary for S-opsin expressing cones closely matches the horizontal meridian.

    Toward standard practices for sharing computer code and programs in neuroscience

    Computational techniques are central in many areas of neuroscience and are relatively easy to share. This paper describes why computer programs underlying scientific publications should be shared and lists simple steps for sharing. Together with ongoing efforts in data sharing, this should aid the reproducibility of research. This article is based on discussions from a workshop to encourage sharing in neuroscience, held in Cambridge, UK, December 2014. It was financially supported and organized by the International Neuroinformatics Coordinating Facility (http://www.incf.org), with additional support from the Software Sustainability Institute (http://www.software.ac.uk). M.H. was supported by funds from the German federal state of Saxony-Anhalt and the European Regional Development Fund (ERDF), Project: Center for Behavioral Brain Sciences.

    Learning Shapes Spontaneous Activity Itinerating over Memorized States

    Learning is a process that helps create neural dynamical systems so that an appropriate output pattern is generated for a given input. Often, such a memory is considered to be embodied in one of the attractors of a neural dynamical system, depending on the initial neural state specified by an input. Neither the neural activity observed in the absence of inputs nor the changes in neural activity caused when an input is provided were studied extensively in the past. However, recent experimental studies have reported the existence of structured spontaneous neural activity and of changes in it when an input is provided. Against this background, we propose that memory recall occurs when the spontaneous neural activity changes to an appropriate output activity upon the application of an input, a phenomenon known as bifurcation in dynamical systems theory. We introduce a reinforcement-learning-based layered neural network model with two synaptic time scales; in this network, I/O relations are successively memorized when the difference between the time scales is appropriate. After the learning process is complete, the neural dynamics are shaped so that they change appropriately with each input. As the number of memorized patterns is increased, the spontaneous neural activity generated after learning shows itineration over the previously learned output patterns. This theoretical finding shows remarkable agreement with recent experimental reports in which spontaneous neural activity in the visual cortex, in the absence of stimuli, itinerates over patterns evoked by previously applied signals. Our results suggest that itinerant spontaneous activity can be a natural outcome of the successive learning of several patterns, and that it facilitates bifurcation of the network when an input is provided.

    A Multi-Component Model of the Developing Retinocollicular Pathway Incorporating Axonal and Synaptic Growth

    During development, neurons extend axons to different brain areas and produce stereotypical patterns of connections. The mechanisms underlying this process have been intensively studied in the visual system, where retinal neurons form retinotopic maps in the thalamus and superior colliculus. The mechanisms active in map formation include molecular guidance cues, trophic factor release, spontaneous neural activity, spike-timing dependent plasticity (STDP), synapse creation and retraction, and axon growth, branching and retraction. To investigate how these mechanisms interact, a multi-component model of the developing retinocollicular pathway was produced based on phenomenological approximations of each of these mechanisms. Core assumptions of the model were that the probabilities of axonal branching and synaptic growth are highest where the combined influences of chemoaffinity and trophic factor cues are highest, and that activity-dependent release of trophic factors acts to stabilize synapses. Based on these behaviors, model axons produced morphologically realistic growth patterns and projected to retinotopically correct locations in the colliculus. Findings of the model include that STDP, gradient detection by axonal growth cones and lateral connectivity among collicular neurons were not necessary for refinement, and that the instructive cues for axonal growth appear to be mediated first by molecular guidance and then by neural activity. Although complex, the model appears to be insensitive to variations in how the component developmental mechanisms are implemented. Activity, molecular guidance and the growth and retraction of axons and synapses are common features of neural development, and the findings of this study may have relevance beyond organization in the retinocollicular pathway.
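The core assumption above, that branching and synapse formation are most probable where guidance cues match best, can be caricatured in a few lines (every parameter here is a hypothetical stand-in, not the paper's multi-component model): synapses are placed stochastically with a probability peaked at the chemoaffinity-matched collicular position, and an ordered map still emerges from the noise.

```python
import numpy as np

# Phenomenological toy of chemoaffinity-biased synapse formation
# (assumed Gaussian preference; illustrative sizes only).
rng = np.random.default_rng(2)
xs = np.linspace(0, 1, 20)      # retinal ganglion cell positions
ys = np.linspace(0, 1, 100)     # collicular positions
sigma = 0.1                     # assumed width of the chemoaffinity preference

term_zones = []
for x in xs:
    # Synapse-formation probability peaks where the cues match (y ~ x).
    p = np.exp(-(ys - x) ** 2 / (2 * sigma ** 2))
    p /= p.sum()
    synapses = rng.choice(ys, size=50, p=p)   # stochastic synapse placement
    term_zones.append(synapses.mean())        # centre of the termination zone

# Despite the stochastic placement, termination-zone centres track
# retinal position, i.e. the projection is retinotopic.
r = np.corrcoef(xs, term_zones)[0, 1]
```

The paper's model layers axon growth, trophic stabilization and activity on top of this kind of bias; the sketch only shows why the chemoaffinity term alone already pulls the map toward retinotopy.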

    Dendritic Morphology Predicts Pattern Recognition Performance in Multi-compartmental Model Neurons with and without Active Conductances

    In this paper we examine how a neuron’s dendritic morphology can affect its pattern recognition performance. We use two different algorithms to systematically explore the space of dendritic morphologies: an algorithm that generates all possible dendritic trees with 22 terminal points, and one that creates representative samples of trees with 128 terminal points. Based on these trees, we construct multi-compartmental models. To assess the performance of the resulting neuronal models, we quantify their ability to discriminate learnt and novel input patterns. We find that the dendritic morphology does have a considerable effect on pattern recognition performance and that the neuronal performance is inversely correlated with the mean depth of the dendritic tree. The results also reveal that the asymmetry index of the dendritic tree does not correlate with the performance for the full range of tree morphologies. The performance of neurons with dendritic tapering is best predicted by the mean and variance of the electrotonic distance of their synapses to the soma. All relationships found for passive neuron models also hold, even in more accentuated form, for neurons with active membranes.
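The predictor highlighted above, the mean depth of a dendritic tree's terminal points, is easy to compute; a small sketch (tree encoding and example trees are illustrative, not the paper's morphologies):

```python
# Mean depth of a dendritic tree's terminal points, the measure reported
# to be inversely correlated with pattern recognition performance.
# Trees are encoded as nested tuples; any non-tuple node is a terminal.
def terminal_depths(tree, depth=0):
    if not isinstance(tree, tuple):           # a terminal point
        return [depth]
    return [d for sub in tree for d in terminal_depths(sub, depth + 1)]

def mean_depth(tree):
    ds = terminal_depths(tree)
    return sum(ds) / len(ds)

symmetric = (("a", "b"), ("c", "d"))          # balanced tree, 4 terminals
asymmetric = ((("a", "b"), "c"), "d")         # chain-like tree, 4 terminals
# The balanced tree has the smaller mean depth (2.0 vs 2.25), so by the
# reported correlation it would be expected to recognise patterns better.
```

The same traversal generalises directly to the 22- and 128-terminal trees explored in the paper.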